Regret has been widely adopted as the metric of choice for evaluating the performance of online optimization algorithms in distributed, multi-agent systems. However, data/model variations associated with the agents can significantly impact decisions and require consensus among agents. Moreover, most existing works focus on developing algorithms for (strongly or non-strongly) convex losses, and few regret bounds are known for distributed online optimization with general non-convex losses. To address these two issues, we propose a novel composite regret, a new network-based regret metric for evaluating distributed online optimization algorithms. We concretely define static and dynamic forms of the composite regret. By leveraging the dynamic form of our composite regret, we develop a consensus-based online normalized gradient (CONGD) method for pseudo-convex losses, and show that it attains a sublinear regret bound involving a regularity term for the path variation of the optimizer. For general non-convex losses, we first clarify the regret of distributed online non-convex learning in light of recent advances, showing that no deterministic algorithm can achieve sublinear regret. We then develop a distributed online non-convex optimization algorithm (DINOCO) based on an oracle for offline optimization, without access to gradients. DINOCO is shown to achieve sublinear regret. To our knowledge, this is the first regret bound for general distributed online non-convex learning.
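The consensus-plus-normalized-gradient update underlying CONGD can be sketched minimally. This is not the paper's algorithm verbatim: the mixing matrix, step size, and quadratic toy losses below are illustrative assumptions.

```python
import numpy as np

def congd_step(X, grads, W, eta):
    """One round of a consensus-based online normalized gradient update.

    X     : (n_agents, d) current decision variables, one row per agent
    grads : (n_agents, d) local (sub)gradients observed this round
    W     : (n_agents, n_agents) doubly stochastic mixing matrix
    eta   : step size
    """
    # Consensus step: each agent averages with its neighbours.
    X_mix = W @ X
    # Normalized gradient step: only the direction is used, which is what
    # makes normalized-gradient schemes suitable for pseudo-convex losses.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return X_mix - eta * grads / norms

# Toy run: 3 agents tracking a common target over a fully mixing network.
rng = np.random.default_rng(0)
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
X = rng.normal(size=(3, 2))
target = np.array([1.0, -1.0])
for t in range(200):
    grads = X - target  # gradient of the local loss 0.5 * ||x - target||^2
    X = congd_step(X, grads, W, eta=0.05)
disagreement = np.linalg.norm(X - X.mean(axis=0))
```

Because the step length is fixed at `eta`, the iterates settle into an `eta`-sized neighbourhood of the target while the consensus step keeps the agents together.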
In the context of distributed deep learning, the issue of stale weights or gradients can lead to poor algorithmic performance. This issue is usually tackled by delay-tolerant algorithms with some mild assumptions on the objective functions and step sizes. In this paper, we propose a different approach to develop a new algorithm, called $\textbf{P}$redicting $\textbf{C}$lipping $\textbf{A}$synchronous $\textbf{S}$tochastic $\textbf{G}$radient $\textbf{D}$escent (aka, PC-ASGD). Specifically, PC-ASGD has two steps: the $\textit{predicting step}$ leverages gradient prediction using Taylor expansion to reduce the staleness of the outdated weights, while the $\textit{clipping step}$ selectively drops the outdated weights to alleviate their negative effects. A trade-off parameter is introduced to balance the effects between these two steps. Theoretically, we present the convergence rate of the proposed algorithm, accounting for the effects of delay, when the smooth objective functions are weakly strongly-convex and nonconvex. A practical variant of PC-ASGD is also proposed, which adopts a condition to help determine the trade-off parameter. For empirical validation, we demonstrate the performance of the algorithm with two deep neural network architectures on two benchmark datasets.
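The two steps can be illustrated with a stylized sketch. This is not the paper's exact update: the function name, the first-order (Taylor-style) gradient prediction, the norm-based clipping rule, and the quadratic toy objective are all illustrative assumptions.

```python
import numpy as np

def pc_asgd_update(w_stale, g_stale, g_prev, lr, theta, clip_thresh):
    """One PC-ASGD-style update blending a predicting and a clipping step.

    w_stale     : stale parameter vector received from a delayed worker
    g_stale     : gradient evaluated at the stale parameters
    g_prev      : gradient from the previous step, used for the Taylor-style
                  first-order prediction of the current gradient
    theta       : trade-off parameter in [0, 1] balancing the two steps
    clip_thresh : norm threshold beyond which the stale gradient is damped
    """
    # Predicting step: first-order extrapolation of the stale gradient,
    # approximating the gradient at the current (unseen) parameters.
    g_pred = g_stale + (g_stale - g_prev)
    predicted = w_stale - lr * g_pred
    # Clipping step: damp the stale contribution when it is too large.
    step = g_stale
    norm = np.linalg.norm(step)
    if norm > clip_thresh:
        step = step * (clip_thresh / norm)
    clipped = w_stale - lr * step
    # Trade-off parameter blends the two candidate updates.
    return theta * predicted + (1.0 - theta) * clipped

# Toy run: minimize 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([4.0, -2.0])
g_prev = w.copy()
for _ in range(100):
    g = w
    w = pc_asgd_update(w, g, g_prev, lr=0.1, theta=0.5, clip_thresh=1.0)
    g_prev = g
```

In the real asynchronous setting the prediction would use the delay and the locally observed gradient history rather than a single previous gradient.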
We propose a novel policy gradient method for multi-agent reinforcement learning that leverages two different variance-reduction techniques and does not require large batches over iterations. Specifically, we propose a momentum-based decentralized policy gradient tracking method (MDPGT), where a new momentum-based variance reduction technique is used to approximate the local policy gradient surrogate with importance sampling, and an intermediate parameter is adopted to track two consecutive policy gradient surrogates. Moreover, MDPGT provably achieves the best available sample complexity of $\mathcal{O}(N^{-1}\epsilon^{-3})$ for converging to an $\epsilon$-stationary point of the global average of $N$ local performance functions (possibly nonconcave). This outperforms the state-of-the-art sample complexity in decentralized model-free reinforcement learning, and when initialized with a single trajectory, the sample complexity matches that obtained by existing decentralized policy gradient methods. We further validate the theoretical claims for Gaussian policy functions. When the required error tolerance $\epsilon$ is small enough, MDPGT leads to a linear speedup, which has previously been established in decentralized stochastic optimization, but not for reinforcement learning. Lastly, we provide empirical results on a multi-agent reinforcement learning benchmark environment to support our theoretical findings.
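The two ingredients, a momentum-based (STORM-style) variance-reduced gradient surrogate and decentralized gradient tracking, can be sketched together on a toy problem. This is a stylized sketch, not MDPGT itself: it omits importance sampling, draws fresh samples where the real method reuses one, and uses an assumed quadratic performance function.

```python
import numpy as np

def mdpgt_round(X, Y, V_prev, grad_fn, W, lr, beta, rng):
    """One stylized round: momentum-based variance reduction for the
    gradient surrogate, plus decentralized gradient tracking.

    X, Y   : (n, d) agent parameters and gradient trackers
    V_prev : (n, d) previous gradient surrogates
    W      : (n, n) doubly stochastic mixing matrix
    """
    X_new = W @ X + lr * Y  # consensus step + ascent along the tracker
    V = np.empty_like(V_prev)
    for i in range(len(X)):
        g_new = grad_fn(X_new[i], rng)
        g_old = grad_fn(X[i], rng)  # the real method reuses one sample here
        # Momentum-based variance reduction of the surrogate.
        V[i] = g_new + (1.0 - beta) * (V_prev[i] - g_old)
    Y_new = W @ Y + V - V_prev  # tracker follows the network-average surrogate
    return X_new, Y_new, V

# Toy problem: maximize -0.5 * ||x - mu||^2 from noisy gradients mu - x.
rng = np.random.default_rng(1)
mu = np.array([1.0, 2.0])
grad_fn = lambda x, r: (mu - x) + 0.01 * r.normal(size=2)
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
X = rng.normal(size=(3, 2))
V = np.stack([grad_fn(x, rng) for x in X])
Y = V.copy()
for _ in range(300):
    X, Y, V = mdpgt_round(X, Y, V, grad_fn, W, lr=0.05, beta=0.2, rng=rng)
err = np.linalg.norm(X.mean(axis=0) - mu)
```

Initializing `Y` to the first surrogates preserves the tracking invariant: the average of the trackers always equals the average of the current surrogates.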
With the rapid increase in the number of transients detected in astronomy, classification methods based on machine learning are increasingly being used. Their goal is typically to obtain a definitive classification of transients, and for good performance they usually require the presence of a large number of observations. However, well-designed, targeted models can reach their classification goals with fewer computing resources. This paper presents SNGuess, a model designed to find young, nearby extragalactic transients with high purity. SNGuess works with a set of features that can be efficiently calculated from astronomical alert data. Some of these features are static and associated with the alert metadata, while others must be calculated from the photometric observations contained in the alert. Most of the features are simple enough to be obtained or calculated already at the early stages of a transient's lifetime after its detection. We calculated these features for a set of labeled public alert data from the Zwicky Transient Facility (ZTF). The core model of SNGuess consists of an ensemble of decision trees, which are trained via gradient boosting. Approximately 88% of the candidates suggested by SNGuess from a set of ZTF alerts spanning April 2020 to August 2021 were found to be true relevant supernovae (SNe). For alerts with bright detections, this number ranges between 92% and 98%. Since April 2020, transients identified by SNGuess as potential SNe in the ZTF alert stream have been published to the Transient Name Server (TNS) under the AMPEL_ZTF_NEW group identifier. The SNGuess score for any transient observed by ZTF can be accessed via a web service. The source code of SNGuess is publicly available.
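The core model, an ensemble of decision trees trained via gradient boosting, can be sketched minimally with one-split trees (stumps) and squared loss. The two toy "alert features" and all hyperparameters below are illustrative assumptions, not SNGuess's actual feature set or training setup.

```python
import numpy as np

def fit_stump(X, r):
    """Best single-feature threshold split minimizing squared error on r."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:  # exclude max so both sides are non-empty
            left = X[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, lv, rv)
    return best

def predict_stump(stump, X):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

def boost(X, y, n_rounds=100, lr=0.3):
    """Gradient boosting with squared loss: each stump fits the residuals."""
    base = y.mean()
    pred = np.full(len(y), base)
    stumps = []
    for _ in range(n_rounds):
        stump = fit_stump(X, y - pred)
        pred += lr * predict_stump(stump, X)
        stumps.append(stump)
    return base, lr, stumps

def predict(model, X):
    base, lr, stumps = model
    out = np.full(len(X), base)
    for s in stumps:
        out += lr * predict_stump(s, X)
    return out

# Toy data: two hypothetical alert features with a deterministic label rule.
rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 2))
y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.5)).astype(float)
model = boost(X, y)
acc = ((predict(model, X) > 0.5) == (y > 0.5)).mean()
```

Production systems would use a library such as XGBoost or scikit-learn's gradient boosting rather than hand-rolled stumps; the sketch only shows the residual-fitting mechanics.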
The Universal Morphology (UniMorph) project is a collaborative effort providing broad-coverage, standardized morphological inflection tables for hundreds of the world's languages. The project comprises two major thrusts: a language-independent feature schema for rich morphological annotation, and a type-level resource of annotated data in diverse languages realizing that schema. This paper presents the expansions and improvements made on several fronts over the last couple of years (since McCarthy et al. (2020)). Collaborative efforts by numerous linguists have added 67 new languages, including 30 endangered languages. We have implemented several improvements to the extraction pipeline to tackle issues such as missing gender and macron information. We have also amended the schema to use a hierarchical structure that is needed for morphological phenomena like multiple-argument agreement and case stacking, while adding some missing morphological features to make the schema more inclusive. In light of the last UniMorph release, we also augmented the database with morpheme segmentation for 16 languages. Lastly, this new release makes a push towards the inclusion of derivational morphology in UniMorph by enriching the data and annotation schema with instances representing derivational processes from MorphyNet.
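A UniMorph entry is a (lemma, inflected form, feature bundle) triple, with semicolon-delimited feature tags from the schema. A minimal sketch of grouping such triples into per-lemma paradigms (the example rows are constructed for illustration, not taken from the released data):

```python
# Illustrative UniMorph-style triples: lemma, inflected form, feature bundle.
rows = [
    ("run", "ran", "V;PST"),
    ("run", "runs", "V;PRS;3;SG"),
    ("run", "running", "V;V.PTCP;PRS"),
    ("walk", "walked", "V;PST"),
]

def build_paradigms(rows):
    """Group (feature bundle -> form) mappings by lemma, as in a paradigm."""
    paradigms = {}
    for lemma, form, feats in rows:
        paradigms.setdefault(lemma, {})[feats] = form
    return paradigms

paradigms = build_paradigms(rows)
```

This flat keying by feature string is exactly what the newly introduced hierarchical structure refines for phenomena like multiple-argument agreement, where a single flat bundle cannot attribute features to their arguments.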
Large-scale self-supervised pre-training of Transformer language models has advanced the field of natural language processing and shown promise in cross-application to the biological "languages" of proteins and DNA. Learning effective representations of DNA sequences from large genomic sequence corpora could accelerate the development of models of gene regulation via transfer learning. However, to accurately model cell-type-specific gene regulation and function, it is necessary to consider not only the information contained in DNA nucleotide sequences, which is largely invariant between cell types, but also the local chemical and structural "epigenetic state" of chromosomes, which varies between cell types. Here, we introduce a Bidirectional Encoder Representations from Transformers (BERT) model that learns representations based on both DNA sequence and paired epigenetic state inputs, which we call Epigenomic BERT (or EBERT). We pre-train EBERT with a masked language model objective across the entire human genome and across 127 cell types. Training this complex model with a previously prohibitively large dataset was made possible for the first time by a partnership with Cerebras Systems, whose CS-1 system powered all pre-training experiments. We show EBERT's transfer learning potential by demonstrating strong performance on a cell-type-specific transcription factor binding prediction task. Our fine-tuned model exceeds state-of-the-art performance on 4 of 13 evaluation datasets from the ENCODE-DREAM benchmark and earns an overall rank of 3rd on the challenge leaderboard. We explore how the inclusion of epigenetic data and task-specific feature augmentation impact transfer learning performance.
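One common way to feed a "paired" input to a BERT-style encoder is to sum a per-position embedding for each input stream. The sketch below shows that pattern for nucleotide tokens plus a discrete epigenetic state; the vocabulary sizes, the three-way state encoding, and the additive combination are assumptions for illustration, since the abstract does not specify EBERT's input encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16
# 4 nucleotides + a [MASK] token; a small hypothetical set of discrete
# epigenetic states (e.g. open / closed / unknown chromatin).
tok_emb = rng.normal(scale=0.02, size=(5, d_model))
state_emb = rng.normal(scale=0.02, size=(3, d_model))
pos_emb = rng.normal(scale=0.02, size=(128, d_model))

def embed(tokens, states):
    """Paired input: per-position sum of sequence, state, and position embeddings."""
    return tok_emb[tokens] + state_emb[states] + pos_emb[: len(tokens)]

tokens = np.array([0, 2, 3, 4, 1])  # A G T [MASK] C
states = np.array([0, 0, 1, 1, 2])
x = embed(tokens, states)           # (seq_len, d_model) encoder input
```

The masked-language-model objective then asks the encoder to recover the identity of the masked nucleotide from its sequence and epigenetic context.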
We propose a new causal inference framework to learn causal effects from multiple, decentralized data sources in a federated setting. We introduce an adaptive transfer algorithm that learns the similarities among the data sources by utilizing Random Fourier Features to disentangle the loss function into multiple components, each of which is associated with a data source. The data sources may have different distributions; the causal effects are independently and systematically incorporated. The proposed method estimates the similarities among the sources through transfer coefficients, and hence requires no prior information about the similarity measures. The heterogeneous causal effects can be estimated without sharing the raw training data among the sources, thus minimizing the risk of privacy leakage. We also provide minimax lower bounds to assess the quality of the parameters learned from the disparate sources. The proposed method is empirically shown to outperform the baselines on decentralized data sources with dissimilar distributions.
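The Random Fourier Features ingredient can be illustrated on its own, independently of the federated setting. A minimal sketch of the Rahimi-Recht construction approximating the RBF kernel; the kernel choice and parameters here are assumptions, as the paper uses RFF to decompose the loss across sources rather than for this toy check.

```python
import numpy as np

def rff_features(X, n_features, gamma, rng):
    """Random Fourier Features approximating the RBF kernel
    k(x, z) = exp(-gamma * ||x - z||^2), so that z(x) @ z(z) ~= k(x, z).
    """
    d = X.shape[1]
    # Frequencies sampled from the kernel's spectral density: N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = rff_features(X, n_features=2000, gamma=0.5, rng=rng)
K_approx = Z @ Z.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq)
max_err = np.abs(K_approx - K_exact).max()
```

Because the feature map is explicit and finite-dimensional, kernel computations become linear in the feature space, which is what makes a per-source decomposition of the loss tractable.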
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Independent component analysis (ICA) is a blind source separation method to recover source signals of interest from their mixtures. Most existing ICA procedures assume independent sampling. Second-order-statistics-based source separation methods have been developed based on parametric time series models for the mixtures from the autocorrelated sources. However, the second-order-statistics-based methods cannot separate the sources accurately when the sources have temporal autocorrelations with mixed spectra. To address this issue, we propose a new ICA method by estimating spectral density functions and line spectra of the source signals using cubic splines and indicator functions, respectively. The mixed spectra and the mixing matrix are estimated by maximizing the Whittle likelihood function. We illustrate the performance of the proposed method through simulation experiments and an EEG data application. The numerical results indicate that our approach outperforms existing ICA methods, including SOBI algorithms. In addition, we investigate the asymptotic behavior of the proposed method.
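The Whittle likelihood at the heart of the estimation step can be sketched directly from the periodogram. The flat-spectrum toy check below is illustrative only; the paper's model parameterizes the spectral density with cubic splines plus indicator functions for line spectra, which is not reproduced here.

```python
import numpy as np

def whittle_loglik(x, spec_fn):
    """Whittle log-likelihood of a zero-mean time series under a spectral model.

    x       : time series of length n
    spec_fn : spec_fn(freqs) -> model spectral density at Fourier frequencies
    """
    n = len(x)
    freqs = np.fft.rfftfreq(n)[1:]          # positive Fourier frequencies
    I = np.abs(np.fft.rfft(x)[1:]) ** 2 / n  # periodogram
    f = spec_fn(freqs)
    # Sum over frequencies of -(log f + I / f).
    return -np.sum(np.log(f) + I / f)

# Toy check: for white noise, the true (flat, unit) spectrum should score
# better than a clearly misspecified one.
rng = np.random.default_rng(3)
x = rng.normal(size=512)
flat = whittle_loglik(x, lambda w: np.ones_like(w))
wrong = whittle_loglik(x, lambda w: 0.1 * np.ones_like(w))
```

In the proposed method, `spec_fn` would be the spline-plus-line-spectrum model, and both it and the mixing matrix are chosen to maximize this likelihood.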
We study the fundamental task of outlier-robust mean estimation for heavy-tailed distributions in the presence of sparsity. Specifically, given a small number of corrupted samples from a high-dimensional heavy-tailed distribution whose mean $\mu$ is guaranteed to be sparse, the goal is to efficiently compute a hypothesis that accurately approximates $\mu$ with high probability. Prior work had obtained efficient algorithms for robust sparse mean estimation of light-tailed distributions. In this work, we give the first sample-efficient and polynomial-time robust sparse mean estimator for heavy-tailed distributions under mild moment assumptions. Our algorithm achieves the optimal asymptotic error using a number of samples scaling logarithmically with the ambient dimension. Importantly, the sample complexity of our method is optimal as a function of the failure probability $\tau$, having an additive $\log(1/\tau)$ dependence. Our algorithm leverages the stability-based approach from the algorithmic robust statistics literature, with crucial (and necessary) adaptations required in our setting. Our analysis may be of independent interest, involving the delicate design of a (non-spectral) decomposition for positive semi-definite matrices satisfying certain sparsity properties.
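As a point of reference for the problem setup (and emphatically not the paper's algorithm, which relies on a stability-based approach with a non-spectral matrix decomposition), a naive baseline combines a coordinate-wise trimmed mean with hard thresholding to the sparsity level:

```python
import numpy as np

def robust_sparse_mean(X, k, trim=0.1):
    """Naive baseline: coordinate-wise trimmed mean, then keep the k
    largest-magnitude coordinates to enforce sparsity.
    """
    n, d = X.shape
    lo, hi = int(n * trim), int(n * (1 - trim))
    Xs = np.sort(X, axis=0)
    mu_hat = Xs[lo:hi].mean(axis=0)          # trimmed mean per coordinate
    keep = np.argsort(np.abs(mu_hat))[-k:]   # hard thresholding to k entries
    out = np.zeros(d)
    out[keep] = mu_hat[keep]
    return out

# Toy data: heavy-tailed samples around a 2-sparse mean, with a small
# fraction of gross corruptions injected.
rng = np.random.default_rng(4)
d, n, k = 20, 500, 2
mu = np.zeros(d)
mu[[3, 7]] = 5.0
X = mu + rng.standard_t(df=3, size=(n, d))   # heavy tails, finite variance
X[:20] = 100.0                                # 4% adversarial corruptions
err = np.linalg.norm(robust_sparse_mean(X, k) - mu)
```

This baseline needs the heavy machinery of the paper precisely because, in high dimensions and with adversarial (rather than merely large) corruptions, coordinate-wise trimming neither achieves the optimal error nor the logarithmic dependence on the ambient dimension.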